Error-in-Variables Jump Regression Using Local Clustering
Authors

Abstract

Similar Articles
Clustering Via Local Regression
This paper deals with the local learning approach for clustering, which is based on the idea that in a good clustering, the cluster label of each data point can be well predicted based on its neighbors and their cluster labels. We propose a novel local learning based clustering algorithm using kernel regression as the local label predictor. Although sum of absolute error is used instead of sum ...
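The local-learning idea summarized above — that in a good clustering each point's label should be predictable from its neighbors' labels — can be sketched as a simple scoring function. This is an illustrative assumption, not the paper's algorithm: it uses a k-nearest-neighbor majority vote as the local label predictor, whereas the paper uses kernel regression, and `local_label_agreement` and the toy data are invented for the example.

```python
import numpy as np

def local_label_agreement(X, labels, k=5):
    # Local-learning score: the fraction of points whose cluster label
    # is correctly predicted from their k nearest neighbors' labels
    # (majority vote here; the paper uses kernel regression instead).
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)  # a point may not vote for itself
    correct = 0
    for i in range(X.shape[0]):
        nn = np.argsort(d[i])[:k]
        correct += np.bincount(labels[nn]).argmax() == labels[i]
    return correct / X.shape[0]

# Two well-separated Gaussian blobs: the true two-cluster labeling is
# perfectly locally predictable; a random labeling scores lower.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.5, (20, 2)), rng.normal(10, 0.5, (20, 2))])
good = np.array([0] * 20 + [1] * 20)
bad = rng.integers(0, 2, 40)
print(local_label_agreement(X, good))  # 1.0
print(local_label_agreement(X, bad))
```

A clustering that maximizes such a local-agreement criterion is exactly what the local-learning approach searches for; the sum-of-absolute-error variant mentioned in the abstract changes the loss of the local predictor, not this overall idea.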
A Local Polynomial Jump Detection Algorithm in Nonparametric Regression
We suggest a one-dimensional jump detection algorithm based on local polynomial fitting for jumps in regression functions (zero-order jumps) or jumps in derivatives (first-order or higher-order jumps). If jumps exist in the m-th order derivative of the underlying regression function, then an (m + 1)-th order polynomial is fitted in a neighborhood of each design point. We then characterize the jump info...
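The zero-order case described above can be sketched in a few lines: fit a local polynomial separately on the left and right neighborhoods of each design point, and flag points where the two fits disagree sharply. This is a minimal illustration of the idea, not the paper's exact estimator; `jump_statistic`, the bandwidth `h`, and the simulated step function are all assumptions made for the example.

```python
import numpy as np

def jump_statistic(x, y, h):
    # For each design point, fit a local linear polynomial on the left
    # and right one-sided neighborhoods of width h, and return the gap
    # between the two fitted values at that point. A large |gap|
    # suggests a zero-order jump in the regression function.
    stats = np.zeros_like(x, dtype=float)
    for i, xi in enumerate(x):
        left = (x >= xi - h) & (x < xi)
        right = (x > xi) & (x <= xi + h)
        if left.sum() < 2 or right.sum() < 2:
            continue  # not enough points for a one-sided linear fit
        cl = np.polyfit(x[left], y[left], 1)
        cr = np.polyfit(x[right], y[right], 1)
        stats[i] = np.polyval(cr, xi) - np.polyval(cl, xi)
    return stats

# Noisy step function with a jump of size 2 at x = 0.5
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)
y = (x > 0.5) * 2.0 + rng.normal(0, 0.1, x.size)
stats = jump_statistic(x, y, h=0.1)
print(x[np.argmax(np.abs(stats))])  # close to 0.5
```

For first- or higher-order jumps the same pattern applies with a higher-degree local polynomial and a comparison of one-sided derivative estimates rather than fitted values.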
Estimation of observation-error variance in errors-in-variables regression
Assessing the variability of an estimator is a key component of the process of statistical inference. In nonparametric regression, estimating observation-error variance is the principal ingredient needed to estimate the variance of the regression mean. Although there is an extensive literature on variance estimation in nonparametric regression, the techniques developed in conventional settings ...
Discussion of 'Correlated Variables in Regression: Clustering and Sparse Estimation'
We would like to begin by congratulating the authors on their fine paper. Handling highly correlated variables is one of the most important issues facing practitioners in high-dimensional regression problems, and in some ways it is surprising that it has not received more attention up to this point. The authors have made substantial progress towards practical methodological proposals, however, a...
Discussion of "Correlated Variables in Regression: Clustering and Sparse Estimation"
Y = Xβ + ε. Here Y is the response vector in R^n, X is an n × p design matrix, β ∈ R^p is the vector of coefficients, and finally ε ∈ R^n is assumed to be multivariate normal with mean zero and covariance matrix σ²I. While it has been shown that the lasso, and its many variants, "work" in terms of variable selection and prediction, they work best for near-orthogonal cases of X. However, if p > n, correl...
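The model and the "near-orthogonal" favorable case mentioned above can be illustrated with a minimal cyclic coordinate-descent lasso. This is a sketch under stated assumptions, not the discussants' code: `lasso_cd`, the penalty weight `lam`, and the simulated sparse β are all invented for the example.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=100):
    # Cyclic coordinate descent for the lasso objective
    #   min_beta  0.5 * ||y - X beta||^2 + lam * ||beta||_1
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            # partial residual with column j's contribution removed
            r = y - X @ beta + X[:, j] * beta[j]
            beta[j] = soft_threshold(X[:, j] @ r, lam) / col_sq[j]
    return beta

# Near-orthogonal design (independent Gaussian columns): the lasso
# recovers the sparse support, as the discussion notes it should.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
beta_true = np.array([3.0, 0.0, 0.0, -2.0, 0.0])
y = X @ beta_true + rng.normal(0, 0.1, 100)
beta_hat = lasso_cd(X, y, lam=5.0)
print(np.round(beta_hat, 2))  # sparse, close to beta_true
```

With strongly correlated columns of X the same procedure tends to pick one variable from a correlated group arbitrarily, which is the failure mode motivating the clustering-plus-sparse-estimation proposals under discussion.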
Journal

Journal title: SSRN Electronic Journal
Year: 2016
ISSN: 1556-5068
DOI: 10.2139/ssrn.2811484